Attribute Regularized Soft Introspective VAE: Towards Cardiac Attribute Regularization Through MRI Domains
Deep generative models have emerged as influential instruments for data
generation and manipulation. Enhancing the controllability of these models by
selectively modifying data attributes has been a recent focus. Variational
Autoencoders (VAEs) have shown promise in capturing hidden attributes but often
produce blurry reconstructions. Controlling these attributes through different
imaging domains is particularly difficult in medical imaging. Recently, the
Soft Introspective VAE has combined the strengths of VAEs and Generative
Adversarial Networks (GANs), which have demonstrated impressive image
synthesis capabilities, by incorporating an adversarial loss into VAE
training. In this work, we propose the Attribute Regularized Soft
Introspective VAE (Attri-SIVAE), which incorporates an attribute-regularization
loss into the Soft-Intro VAE framework. We evaluate
experimentally the proposed method on cardiac MRI data from different domains,
such as various scanner vendors and acquisition centers. The proposed method
achieves reconstruction and regularization performance comparable to the
state-of-the-art attribute-regularized VAE but, unlike that method, also
maintains the same regularization level when tested on a different dataset.
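The abstract does not spell out the attribute-regularization loss; a common formulation in this line of work (AR-VAE-style regularization) encourages one latent dimension to order a batch the same way a target attribute does, by comparing pairwise differences. A minimal numpy sketch, where `delta` and the tanh/sign construction are assumptions borrowed from that formulation rather than details stated here:

```python
import numpy as np

def attribute_regularization_loss(z_dim, attr, delta=10.0):
    """AR-VAE-style attribute regularization (illustrative sketch).

    z_dim: (N,) values of one latent dimension over a batch.
    attr:  (N,) corresponding attribute values.
    Penalizes pairs whose latent ordering disagrees with the
    attribute ordering: tanh(delta * dz) should match sign(da).
    """
    dz = z_dim[:, None] - z_dim[None, :]   # pairwise latent differences
    da = attr[:, None] - attr[None, :]     # pairwise attribute differences
    return float(np.mean(np.abs(np.tanh(delta * dz) - np.sign(da))))

rng = np.random.default_rng(0)
a = rng.normal(size=32)
loss_aligned = attribute_regularization_loss(a, a)    # latent tracks attribute
loss_reversed = attribute_regularization_loss(-a, a)  # latent anti-correlated
```

A latent dimension that varies monotonically with the attribute yields a near-zero loss, while a reversed ordering is heavily penalized.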
AtrialGeneral: Domain Generalization for Left Atrial Segmentation of Multi-Center LGE MRIs
Left atrial (LA) segmentation from late gadolinium enhanced magnetic
resonance imaging (LGE MRI) is a crucial step needed for planning the treatment
of atrial fibrillation. However, automatic LA segmentation from LGE MRI is
still challenging, due to the poor image quality, high variability in LA
shapes, and unclear LA boundary. Though deep learning-based methods can provide
promising LA segmentation results, they often generalize poorly to unseen
domains, such as data from different scanners and/or sites. In this work, we
collect 210 LGE MRIs from different centers with different levels of image
quality. To evaluate the domain generalization ability of models on the LA
segmentation task, we employ four commonly used semantic segmentation networks
for the LA segmentation from multi-center LGE MRIs. Besides, we investigate
three domain generalization strategies, i.e., histogram matching, mutual
information based disentangled representation, and random style transfer, where
a simple histogram matching proves to be the most effective.
Comment: 10 pages, 4 figures, MICCAI202
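Histogram matching, the strategy the abstract finds most effective, remaps each image's intensities so their distribution follows a reference image's. A minimal quantile-based numpy sketch (the paper's exact preprocessing pipeline may differ):

```python
import numpy as np

def match_histograms(source, template):
    """Remap source intensities so their empirical distribution
    matches the template's (quantile mapping, illustrative sketch)."""
    s_values, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    t_values, t_counts = np.unique(template.ravel(), return_counts=True)
    # Empirical CDFs of both images.
    s_quantiles = np.cumsum(s_counts) / source.size
    t_quantiles = np.cumsum(t_counts) / template.size
    # Map each source quantile to the template intensity at that quantile.
    matched = np.interp(s_quantiles, t_quantiles, t_values)
    return matched[s_idx].reshape(source.shape)

rng = np.random.default_rng(1)
src = rng.normal(5.0, 2.0, (64, 64))   # e.g. image from one scanner
tpl = rng.normal(0.0, 1.0, (64, 64))   # reference-domain image
out = match_histograms(src, tpl)
```

After matching, the output follows the template's intensity statistics regardless of the source domain's scale and offset, which is why this simple normalization helps across scanners and sites.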
Medical Image Analysis on Left Atrial LGE MRI for Atrial Fibrillation Studies: A Review
Late gadolinium enhancement magnetic resonance imaging (LGE MRI) is commonly
used to visualize and quantify left atrial (LA) scars. The position and extent
of scars provide important information of the pathophysiology and progression
of atrial fibrillation (AF). Hence, LA scar segmentation and quantification
from LGE MRI can be useful in computer-assisted diagnosis and treatment
stratification of AF patients. Since manual delineation can be time-consuming
and subject to intra- and inter-expert variability, automating this
computation is highly desirable; it nevertheless remains challenging and
under-researched.
This paper aims to provide a systematic review on computing methods for LA
cavity, wall, scar and ablation gap segmentation and quantification from LGE
MRI, and the related literature for AF studies. Specifically, we first
summarize AF-related imaging techniques, particularly LGE MRI. Then, we review
the methodologies of the four computing tasks in detail, and summarize the
validation strategies applied in each task. Finally, the possible future
developments are outlined, with a brief survey on the potential clinical
applications of the aforementioned methods. The review shows that the research
into this topic is still in early stages. Although several methods have been
proposed, especially for LA segmentation, there is still large scope for
further algorithmic developments due to performance issues related to the high
variability of enhancement appearance and differences in image acquisition.
Comment: 23 page
On the Usage of GPUs for Efficient Motion Estimation in Medical Image Sequences
Images are ubiquitous in biomedical applications, from basic research to clinical practice. With the rapid increase in resolution and dimensionality of the images, and the need for real-time performance in many applications, computational requirements demand proper exploitation of multicore architectures. Towards this, GPU-specific implementations of image analysis algorithms are particularly promising. In this paper, we investigate the mapping of an enhanced motion estimation algorithm to novel GPU-specific architectures, and the resulting challenges and benefits therein. Using a database of three-dimensional image sequences, we show that the mapping leads to substantial performance gains, up to a factor of 60, and can provide a near-real-time experience. We also show how architectural peculiarities of these devices can best be exploited to the benefit of such algorithms, specifically to address the challenges related to their access patterns and different memory configurations. Finally, we evaluate the performance of the algorithm on three different GPU architectures and perform a comprehensive analysis of the results.
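The abstract does not detail the enhanced motion estimation algorithm itself. As an illustration of the kind of embarrassingly parallel workload that maps well to GPUs, here is a minimal CPU sketch of exhaustive block matching with a sum-of-absolute-differences (SAD) criterion; the block size, search range, and SAD criterion are illustrative assumptions, not the paper's method. On a GPU, each block (or each candidate displacement) would be assigned to an independent thread group, with the search window staged in shared memory to tame the access pattern:

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Exhaustive block-matching motion estimation (illustrative sketch).

    For each block of `cur`, find the displacement (dy, dx) within
    +/- `search` whose patch in `ref` minimizes the SAD.
    Returns {(block_y, block_x): (dy, dx)}.
    """
    h, w = ref.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            tgt = cur[by:by + block, bx:bx + block]
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate window outside the frame
                    sad = np.abs(ref[y:y + block, x:x + block] - tgt).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors

rng = np.random.default_rng(2)
ref = rng.normal(size=(32, 32))
cur = np.roll(ref, shift=(2, 1), axis=(0, 1))  # known global motion
mv = block_match(ref, cur)
```

Every block's search is independent of every other's, which is exactly the structure that yields the large speedups the paper reports.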
3D Masked Autoencoders with Application to Anomaly Detection in Non-Contrast Enhanced Breast MRI
Self-supervised models allow (pre-)training on unlabeled data and therefore
have the potential to overcome the need for large annotated cohorts. One
leading self-supervised model is the masked autoencoder (MAE) which was
developed on natural imaging data. The MAE masks out a high fraction of
vision transformer (ViT) input patches and then recovers the uncorrupted
images as a pretraining task. In this work, we extend MAE to perform anomaly
detection on breast magnetic resonance imaging (MRI). This new model, coined
masked autoencoder for medical imaging (MAEMI), is trained on two
non-contrast enhanced
MRI sequences, aiming at lesion detection without the need for intravenous
injection of contrast media and temporal image acquisition. During training,
only non-cancerous images are presented to the model, with the purpose of
localizing anomalous tumor regions during test time. We use a public dataset
for model development. Performance of the architecture is evaluated in
reference to subtraction images created from dynamic contrast enhanced
(DCE)-MRI.
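The core MAE pretraining operation, masking out a high fraction of patches, can be sketched as follows. The patch size, 75% mask ratio, and zero-filling are illustrative assumptions (the actual MAE drops masked tokens from the encoder input rather than zeroing pixels, and the reconstruction network itself is out of scope here):

```python
import numpy as np

def mask_patches(image, patch=4, mask_ratio=0.75, rng=None):
    """Zero out a random fraction of non-overlapping patches
    (illustrative MAE-style masking sketch).

    Returns (masked_image, patch_mask) where patch_mask[gy, gx]
    is True for masked patches.
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape
    gh, gw = h // patch, w // patch
    n = gh * gw
    n_mask = int(round(mask_ratio * n))
    mask = np.zeros((gh, gw), dtype=bool)
    mask.flat[rng.permutation(n)[:n_mask]] = True
    masked = image.copy()
    for gy in range(gh):
        for gx in range(gw):
            if mask[gy, gx]:
                masked[gy * patch:(gy + 1) * patch,
                       gx * patch:(gx + 1) * patch] = 0.0
    return masked, mask

img = np.ones((32, 32))
masked, mask = mask_patches(img, patch=4, mask_ratio=0.75,
                            rng=np.random.default_rng(0))
```

At test time, a model trained only on non-cancerous images reconstructs masked regions as healthy tissue, so a per-pixel reconstruction error map can serve as the anomaly score for tumor localization.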
- …